Dr. Marcel Moritz, Associate Professor of Public Law at the University of Lille, has called for comprehensive legal regulation of artificial intelligence (AI), underscoring the urgent need to address concerns of fairness, bias, and liability as technology becomes increasingly integrated into everyday life.
Speaking at the 6th Eminent Legal Scholars and Lawyers Public Lecture organised by the Faculty of Law, Dr. Moritz emphasised that while AI brings innovation and convenience, it also raises pressing legal questions that must not be ignored.
He highlighted the legal ambiguity surrounding AI, particularly concerning the origin and integrity of the data used to train these systems.
“Artificial intelligence relies heavily on data. Where does the training data come from? What are the biases? These are questions the law must address.”
According to Dr. Moritz, the challenges posed by AI are global and not limited to any one country.
“We have the same challenges all around the world. It’s just that we are now discovering them because we are using AI more. We need to find solutions to mitigate the risks,” he said. “AI is not magical. It uses databases to create results. If the data contains inaccuracies, the results will be poor. People must be aware of that.”
Dr. Moritz pointed to recent European developments, referencing two separate AI regulations: one from the Council of Europe, which takes a human rights-based approach, and another from the European Union, which is business-centred.
“These regulations were adopted last year, and they can serve as inspiration for other countries,” he stated.

The Vice-Chancellor, Professor (Mrs.) Rita Akosua Dickson, emphasised the timeliness of the lecture.
“AI is right in our communities with us. It is an active reality that is reshaping the way we work, communicate, and govern, and how we understand human rights and responsibilities.”
She stressed the need for responsible regulation to ensure that AI systems do not compromise human rights, dignity, or the rule of law.
“It is necessary to pay attention to how we can responsibly regulate these AI systems in such a way that human rights, dignity, and the rule of law do not suffer.”
Prof. Dickson also called on higher education institutions to champion the development of ethical AI technologies that align with human rights and societal values.
“There is the need for robust legal frameworks to ensure that technology serves society and communities in a more ethical and responsible manner. As a higher education institution, we must ensure that the ethical AI technologies and practices we commit to align with human rights and societal values.”
As Africa prepares for an AI-driven future, she noted the importance of building an AI-ready workforce and developing comprehensive legal and policy responses to AI.
She mentioned that Ghana is already taking bold steps towards this goal through the development of a National AI Strategy, an initiative supported by the Responsible AI Lab (RAIL) at KNUST, under the leadership of Professor Jerry John Kponyo.

The Acting Dean of the Faculty of Law, Dr. Chris Adomako-Kwakye, expressed appreciation for the insights shared and emphasised the importance of such academic engagements in shaping Africa’s AI policy and legal landscape.